How to Escape Saddle Points Efficiently

Authors

  • Chi Jin
  • Rong Ge
  • Praneeth Netrapalli
  • Sham M. Kakade
  • Michael I. Jordan
Abstract

This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations that depends only poly-logarithmically on dimension (i.e., it is almost "dimension-free"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community.
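To make the procedure described in the abstract concrete, below is a minimal sketch of perturbed gradient descent in Python. It follows the high-level idea (take an ordinary gradient step when the gradient is large; add a small random perturbation when the gradient is small and we may be near a saddle point), but the step size, perturbation radius, waiting period, and stopping rule are placeholder hyperparameters chosen for illustration, not the constants derived in the paper.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=1e-2, g_thresh=1e-3,
                               radius=1e-2, t_wait=10, max_iter=10_000,
                               rng=None):
    """Sketch of perturbed gradient descent.

    grad     : callable returning the gradient of f at a point
    x0       : starting point (numpy array)
    eta      : step size (placeholder value)
    g_thresh : gradient-norm threshold below which we perturb
    radius   : radius of the random perturbation ball
    t_wait   : minimum number of iterations between perturbations
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    last_perturb = -t_wait  # allow a perturbation immediately if needed
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_perturb >= t_wait:
            # Small gradient: possibly near a saddle point, so add a
            # perturbation sampled uniformly from a ball of the given radius.
            d = rng.normal(size=x.shape)
            d *= radius * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(d)
            x = x + d
            last_perturb = t
        else:
            # Large gradient: an ordinary gradient step makes progress.
            x = x - eta * g
    return x
```

For instance, with `grad = lambda x: H @ x` for an indefinite symmetric matrix `H` and `x0 = np.zeros(d)`, plain gradient descent never leaves the saddle at the origin (the gradient there is exactly zero), whereas the perturbed variant moves off it after a perturbation and then descends along a negative-curvature direction.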


Related resources

How to Escape Saddle Points Efficiently

In order to prove the main theorem, we need to show that the algorithm will not be stuck at any point that either has a large gradient or is a saddle point. This idea is similar to previous works (e.g., Ge et al., 2015). We first state a standard lemma showing that if the current gradient is large, then we make progress in function value. Lemma 12. Assume f(·) satisfies A1; then for gradient desc...
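The "large gradient implies progress" step referenced in this excerpt is the standard descent lemma. A minimal statement, assuming ∇f is ℓ-Lipschitz and the step size satisfies η ≤ 1/ℓ (the usual setting, not necessarily the paper's exact constants), is:

```latex
% Descent lemma: if \nabla f is \ell-Lipschitz and x_{t+1} = x_t - \eta \nabla f(x_t)
% with step size \eta \le 1/\ell, then the function value decreases by an amount
% proportional to the squared gradient norm.
\begin{align*}
f(x_{t+1})
  &\le f(x_t) + \langle \nabla f(x_t),\, x_{t+1} - x_t \rangle
       + \tfrac{\ell}{2}\,\lVert x_{t+1} - x_t \rVert^2 \\
  &=   f(x_t) - \eta\Bigl(1 - \tfrac{\eta \ell}{2}\Bigr)\lVert \nabla f(x_t) \rVert^2
  \;\le\; f(x_t) - \tfrac{\eta}{2}\,\lVert \nabla f(x_t) \rVert^2 .
\end{align*}
```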


Gradient Descent Can Take Exponential Time to Escape Saddle Points

Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape. On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et al., 2017] is not...


Basin constrained κ-dimer method for saddle point finding.

Within the harmonic approximation to transition state theory, the rate of escape from a reactant is calculated from local information at saddle points on the boundary of the state. The dimer minimum-mode following method can be used to find such saddle points. But as we show, dimer searches that are initiated from a reactant state of interest can converge to saddles that are not on the boundary...


Locating and Characterizing the Stationary Points of the Extended Rosenbrock Function

Two variants of the extended Rosenbrock function are analyzed in order to find the stationary points. The first variant is shown to possess a single stationary point, the global minimum. The second variant has numerous stationary points for high dimensionality. A previously proposed method is shown to be numerically intractable, requiring arbitrary precision computation in many cases to enumera...
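For reference, below is a minimal Python sketch of one widely used coupled form of the extended (chained) Rosenbrock function; the two variants analyzed in the excerpt above may couple the terms differently, so treat this as an illustrative assumption rather than the exact functions studied.

```python
import numpy as np

def extended_rosenbrock(x):
    """Chained/extended Rosenbrock function (one common coupled form):
    f(x) = sum_{i=1}^{n-1} [ 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ].
    Its global minimum is at x = (1, ..., 1) with f = 0."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def extended_rosenbrock_grad(x):
    """Gradient of the coupled form above."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    # Contribution of each pairwise term to its two variables.
    g[:-1] += -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

# Sanity check: the gradient vanishes at the global minimum (1, ..., 1).
assert np.allclose(extended_rosenbrock_grad(np.ones(10)), 0.0)
```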


Efficient approaches for escaping higher order saddle points in non-convex optimization

Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with l...
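As a one-line illustration of the degeneracy mentioned in this excerpt (a standard textbook example, not one taken from the paper):

```latex
% A degenerate stationary point: first and second derivatives both vanish,
% so second-order information cannot classify it.
f(x) = x^3, \qquad f'(0) = 0, \qquad f''(0) = 0, \qquad f'''(0) = 6 \neq 0 .
```

The origin is neither a local minimum nor a strict (non-degenerate) saddle; distinguishing it requires third-order information, which is the higher-order regime the excerpt refers to.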



Journal title:

Volume   Issue

Pages  -

Publication date: 2017